Search Results
Issue Info:
  • Year: 2023
  • Volume: 21
  • Issue: 3
  • Pages: 183-192
Measures:
  • Citations: 0
  • Views: 91
  • Downloads: 9
Abstract: 

Increasing the sharpness of an image generally means strengthening its high-frequency components and enhancing sharpness at the edges. Existing sharpness-enhancement models assume that the sensitivity of the human visual system is uniform across the scene and ignore the effects of visual attention driven by visual saliency. Several studies have shown that visual sensitivity is higher in regions that attract more attention, so sharpening an image according to visual attention can produce greater perceived sharpness. This article proposes a sharpness-enhancement model that uses the relationship between the map of high-frequency image components and visual saliency to determine the optimal amount of sharpening. Using a nonlinear function, the model expresses the optimal sharpness value for an image as a function of its visual saliency. The parameters of the nonlinear function are determined by formulating and solving an optimization problem, whose solution yields the optimal sharpness value automatically. The results show that, when appropriate values of the control parameters are selected, the proposed method performs more effectively than the compared methods.
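As a rough illustration of this idea (not the authors' exact formulation), the sketch below modulates the gain of a plain unsharp-masking step with a nonlinear function of a saliency map; the frequency-tuned saliency estimator, the modulation g(s) = a + b·s^γ, and all parameter values are assumptions made for the example.

```python
import cv2
import numpy as np

def frequency_tuned_saliency(img_bgr):
    """Achanta-style saliency: distance of each blurred Lab pixel from the mean
    Lab color (a stand-in for the saliency model used in the paper)."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    blurred = cv2.GaussianBlur(lab, (5, 5), 0)
    sal = np.linalg.norm(blurred - lab.reshape(-1, 3).mean(axis=0), axis=2)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

def saliency_modulated_sharpen(img_bgr, a=0.5, b=1.5, gamma=1.0, sigma=1.5):
    """Unsharp masking whose gain follows a nonlinear function of saliency.

    g(s) = a + b * s**gamma is a hypothetical modulation function; the paper
    determines such parameters by solving an optimization problem, here they
    are fixed by hand for illustration.
    """
    sal = frequency_tuned_saliency(img_bgr)
    img = img_bgr.astype(np.float32)
    high_freq = img - cv2.GaussianBlur(img, (0, 0), sigma)   # high-frequency components
    gain = a + b * np.power(sal, gamma)                       # per-pixel sharpening strength
    sharpened = img + gain[..., None] * high_freq
    return np.clip(sharpened, 0, 255).astype(np.uint8)

# Example usage:
# out = saliency_modulated_sharpen(cv2.imread("photo.jpg"))
```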

Issue Info:
  • Year: 2020
  • Volume: 17
  • Issue: 2 (44)
  • Pages: 71-84
Measures:
  • Citations: 0
  • Views: 171
  • Downloads: 0
Abstract: 

Owing to physiological and physical limitations of the brain and the eye, the human visual system (HVS) cannot perceive changes in a visual signal whose magnitude is below a certain threshold, the so-called just-noticeable distortion (JND) threshold. Visual attention (VA) provides a mechanism for selecting particular aspects of a visual scene so as to reduce the computational load on the brain. According to current knowledge, VA is believed to be driven by visual saliency: a region of a scene is visually salient if it possesses characteristics that make it stand out from its surroundings and draw our attention to it. Most existing approaches to estimating the JND threshold assume that the sensitivity of the HVS is uniform across the scene and ignore the effects of visual attention caused by visual saliency. Several studies have shown that visual sensitivity is higher in salient areas that attract more attention, so JND thresholds are lower at those points, and vice versa; in other words, visual saliency modulates JND thresholds. Considering the effect of visual saliency on the JND threshold is therefore not only reasonable but necessary. This paper presents an improved non-uniform model for estimating the JND threshold of images that incorporates the visual-attention mechanism and exploits visual saliency, which makes different parts of an image unequally important. The proposed model, which can build on any existing uniform JND model, adjusts the JND threshold of each pixel according to visual saliency using a nonlinear modulation function. The parameters of the nonlinear function are obtained through an optimization procedure, yielding an improved JND model. The model owes its efficiency, in terms of computational simplicity, accuracy, and applicability, to: a nonlinear modulation function with minimal computational complexity; a base JND model chosen for simplicity and accuracy; a computational saliency model that accurately localizes salient areas; and an efficient cost function solved with an appropriate objective image-quality-assessment metric. To evaluate the proposed model, a set of objective and subjective experiments was performed on 10 images selected from the MIT database. A two-alternative forced-choice (2AFC) procedure was used to compare subjective image quality, and SSIM and IW-SSIM were used for the objective experiments. The results show that the proposed model is significantly superior to existing models in the subjective experiment and, on average, outperforms the compared models in the objective experiment. An analysis of computational complexity also shows that the proposed model is faster than the compared models.
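A minimal sketch of the modulation idea, assuming a simplified luminance-adaptation formula as the uniform base JND model and a simple linear scaling in place of the paper's nonlinear modulation function (whose parameters are found by optimization):

```python
import numpy as np

def luminance_jnd(gray):
    """Luminance-adaptation JND (a simplified stand-in for a uniform base JND model)."""
    bg = gray.astype(np.float32)
    return np.where(bg <= 127,
                    17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0,
                    3.0 / 128.0 * (bg - 127.0) + 3.0)

def saliency_modulated_jnd(gray, saliency, k1=1.2, k2=0.6):
    """Scale the base JND map with saliency so salient pixels get lower thresholds.

    f(s) = k1 - k2 * s is a hand-picked linear stand-in for the paper's
    nonlinear modulation function; the parameters are illustrative only.
    """
    s = saliency.astype(np.float32)
    s = (s - s.min()) / (s.max() - s.min() + 1e-8)   # normalize saliency to [0, 1]
    return luminance_jnd(gray) * (k1 - k2 * s)
```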

Issue Info:
  • Year: 2015
  • Volume: 8
  • Issue: 2
  • Pages: 9-17
Measures:
  • Citations: 0
  • Views: 265
  • Downloads: 144
Abstract: 

The human visual system generally searches for salient regions and movements in a video scene to reduce its search space and effort. Visual saliency maps therefore provide important information for scene understanding in many applications. In this paper we present a simple, computationally light method that uses a visual saliency map for background subtraction in a video stream. The technique is based on finding image segments whose intensity values can be distinguished accurately. The practical implementation uses a sliding-window approach in which the distributions of objects and their surroundings are estimated with semi-local intensity histograms. Because the method requires no training and has a low computational load, it can be used in embedded systems such as cameras. With this background-subtraction algorithm we can detect pre-defined targets, and the video regions detected automatically by the proposed model are consistent with ground-truth saliency maps derived from eye-movement data. Comparisons with state-of-the-art background-subtraction techniques indicate that the introduced approach achieves high performance and accuracy.
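The block-wise histogram comparison could look roughly like the following sketch; the window size, bin count, histogram-intersection similarity, and threshold are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def local_histograms(gray, win=16, bins=32):
    """Intensity histograms over non-overlapping win x win blocks (semi-local)."""
    h, w = gray.shape
    hists = np.zeros((h // win, w // win, bins), dtype=np.float32)
    for i in range(h // win):
        for j in range(w // win):
            block = gray[i * win:(i + 1) * win, j * win:(j + 1) * win]
            hist, _ = np.histogram(block, bins=bins, range=(0, 256))
            hists[i, j] = hist / hist.sum()
    return hists

def foreground_blocks(frame_gray, background_gray, win=16, bins=32, thresh=0.35):
    """Mark blocks whose intensity distribution differs from the background model.

    Histogram intersection is the assumed similarity measure; blocks whose
    similarity to the background falls below 1 - thresh are flagged as foreground.
    """
    hf = local_histograms(frame_gray, win, bins)
    hb = local_histograms(background_gray, win, bins)
    similarity = np.minimum(hf, hb).sum(axis=-1)   # 1.0 = identical distributions
    return similarity < (1.0 - thresh)
```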

Issue Info:
  • Year: 2023
  • Volume: 14
  • Issue: 54
  • Pages: 109-120
Measures:
  • Citations: 0
  • Views: 125
  • Downloads: 0
Abstract: 

In this study, an effective and efficient algorithm for detecting a saliency map is presented, based on modeling the rapid response of the human visual system to changes in intensity, texture, and color. Features such as drawing on the behavior of the human visual system, requiring no training, reducing the number of image colors, reducing the number of color channels, and making proper use of minimal texture information increase the algorithm's efficiency. In the first step of the proposed method, because the human visual system is more sensitive to higher-contrast signals, only the higher-contrast channel is used to extract the color saliency map. The intensity saliency map and the texture saliency map are then extracted from the intensity component of the Lab color space using a simple-cell computational model of the visual cortex, and finally the color, intensity, and texture saliency maps are combined to obtain the object saliency map. The proposed method and existing methods were tested on the MSRA10K and ECSSD databases. On the ECSSD database the mean absolute error, F-measure, and area under the ROC curve of the proposed hybrid algorithm, which detects the saliency map using dominant color and texture features, are 0.173, 0.789, and 0.891, respectively, and on the MSRA10K database they are 0.178, 0.790, and 0.919, indicating better performance than the other compared methods.
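A hedged sketch of the final fusion step, assuming a simple weighted average of min-max normalized feature maps; the actual combination rule and weights used in the paper may differ.

```python
import numpy as np

def combine_feature_maps(color_map, intensity_map, texture_map, weights=(1.0, 1.0, 1.0)):
    """Fuse per-feature saliency maps into one object saliency map.

    Each map is min-max normalized before a weighted average; the equal
    weights are an assumption, not the paper's combination rule.
    """
    def norm(m):
        m = m.astype(np.float32)
        return (m - m.min()) / (m.max() - m.min() + 1e-8)

    maps = [norm(color_map), norm(intensity_map), norm(texture_map)]
    fused = sum(w * m for w, m in zip(weights, maps)) / sum(weights)
    return norm(fused)
```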

Issue Info:
  • Year: 2022
  • Volume: 10
  • Issue: 1
  • Pages: 163-174
Measures:
  • Citations: 0
  • Views: 102
  • Downloads: 57
Abstract: 

Background and Objectives: Visual attention is a high-order cognitive process of the human brain that determines where a human observer attends. Dynamic computational visual-attention models are modeled on this behavior and can predict which areas a human will attend to when viewing a scene such as a video. Although several types of computational models have been proposed to better explain saliency maps in static and dynamic environments, most are tailored to specific scenes. In this paper we propose a model that can generate saliency maps in various dynamic environments with complex scenes. Methods: We use a deep learner as a mediating (gating) network to combine basic saliency maps with appropriate weights. Each basic saliency map covers an essential feature of human visual attention, and the final saliency map closely resembles human visual behavior. Results: The proposed model is run on two datasets, and the generated saliency maps are evaluated with criteria such as ROC, CC, NSS, SIM, and KL divergence. The results show that the proposed model performs well compared with similar models. Conclusion: The proposed model consists of three main parts: the basic saliency maps, the gating network, and the combiner. It was implemented on the ETMD dataset, and the resulting saliency maps (visual-attention areas) were compared with those of other models using the accepted evaluation criteria in this area. The results obtained from the proposed model are acceptable, and on these criteria it performs better than similar models.
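As one possible reading of the gating idea (not the paper's architecture), a small convolutional gating network could predict per-pixel softmax weights for the basic saliency maps; the layer sizes and the per-pixel weighting below are assumptions.

```python
import torch
import torch.nn as nn

class GatedSaliencyFusion(nn.Module):
    """Combine K basic saliency maps with weights predicted by a gating network."""

    def __init__(self, num_maps=4):
        super().__init__()
        # Gating network: looks at the stacked basic maps and predicts
        # per-pixel mixing weights (one channel per basic map).
        self.gate = nn.Sequential(
            nn.Conv2d(num_maps, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, num_maps, kernel_size=1),
        )

    def forward(self, basic_maps):                    # basic_maps: (B, K, H, W)
        weights = torch.softmax(self.gate(basic_maps), dim=1)
        return (weights * basic_maps).sum(dim=1, keepdim=True)  # fused map (B, 1, H, W)

# maps = torch.rand(2, 4, 64, 64)   # e.g. motion, color, intensity, and face maps
# saliency = GatedSaliencyFusion(4)(maps)
```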

Author(s): Kalatehjari E. | YAGHMAEE F.

Issue Info:
  • Year: 2019
  • Volume: 51
  • Issue: 1
  • Pages: 83-92
Measures:
  • Citations: 0
  • Views: 216
  • Downloads: 63
Abstract: 

In this paper, a novel saliency-based reduced-reference image-quality-assessment (RR-IQA) metric is introduced. Because the human visual system is sensitive to salient regions, evaluating image quality based on those regions can increase the accuracy of the algorithm. To extract the salient regions, we use blob decomposition (BD) as a texture-component descriptor. A new blob-decomposition method is proposed that extracts blobs not only at different scales but also at different orientations. Blob attributes comprising blob location, shape, and color are used to describe the texture of the image in accordance with human visual perception. A region covariance matrix is computed from the extracted blob attributes and can easily be interpreted in terms of its eigenvalues. The reference image is thus described by a square covariance matrix, which provides good data reduction, and the same process is used to describe the image received at the destination. Finally, image quality is estimated from the eigenvalues of the two covariance matrices. The performance of the proposed metric is evaluated on different databases. Experimental results indicate that the proposed method agrees with human visual perception while using little reference data (at most 90 values).
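A short sketch of the comparison step, assuming blob attributes are collected as per-region feature vectors and the two covariance descriptors are compared through their generalized eigenvalues (a common choice for covariance descriptors, not necessarily the paper's exact quality formula).

```python
import numpy as np
from scipy.linalg import eigh

def region_covariance(features):
    """Covariance matrix of per-blob feature vectors (rows = blobs, columns = attributes)."""
    return np.cov(features, rowvar=False)

def covariance_distance(cov_ref, cov_dst):
    """Distance from the generalized eigenvalues of two covariance descriptors.

    A higher distance suggests lower quality of the received image; the paper's
    exact mapping from eigenvalues to a quality score may differ.
    """
    lam = eigh(cov_ref, cov_dst, eigvals_only=True)
    lam = np.clip(lam, 1e-12, None)
    return np.sqrt(np.sum(np.log(lam) ** 2))

# ref_feats, dst_feats: (N, d) arrays of blob attributes (location, shape, color)
# score = covariance_distance(region_covariance(ref_feats), region_covariance(dst_feats))
```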

Issue Info:
  • Year: 2020
  • Volume: 16
  • Issue: 4 (42)
  • Pages: 59-72
Measures:
  • Citations: 0
  • Views: 660
  • Downloads: 0
Abstract: 

When watching natural scenes, an overwhelming amount of information is delivered to the human visual system (HVS); the optic nerve is estimated to receive around 10^8 bits of information per second. This amount of information cannot be processed immediately by the neural system. The visual-attention mechanism enables the HVS to spend neural resources efficiently, only on selected parts of the scene at a time, which results in better and faster perception of events. Saliency in visual data can be measured with subjective eye-tracking experiments, in which devices track the eye movements of a number of subjects while they watch images or videos on a screen. Such experiments are not very practical, however, because they require a restricted test environment and are time-consuming and expensive. Researchers have therefore developed computational visual-attention models (VAMs) that attempt to mimic the saliency-prediction process of the HVS. Visual-attention modeling is widely used in image processing and understanding. Computational models of visual attention aim to predict the areas of an image that are most interesting to observers; to this end they produce saliency maps, in which each pixel is assigned the likelihood of being looked at. In other words, a saliency map highlights where viewers are most likely to look in an image. Knowing the regions of interest (ROIs) is helpful in applications such as image and video compression, object recognition and detection, visual search, retargeting, retrieval, image matching, and segmentation. Saliency prediction is generally done in a bottom-up, top-down, or hybrid fashion: bottom-up approaches exploit low-level attributes such as brightness, color, edges, and texture; top-down approaches focus on context-dependent information in the scene, such as the presence of humans, animals, or text; and hybrid methods combine the two streams. This paper proposes a new saliency-prediction method that uses sparse wavelet coefficients selected from low-level, bottom-up saliency features. Wavelet-based methods are widely used in image processing because they are especially powerful at decomposing images into several scales of resolution. In our method, random compressive sampling is first performed on the wavelet coefficients in the Lab color space. Random sampling reduces the computational complexity and provides a sparse representation of the coefficients. The number of decomposition levels is chosen based on the information-diffusion property of the signal. The sampling can be performed at a rate different from the Nyquist rate, based on the sparsity degree of the signal; it is shown that, given the basis vectors of a sparse representation of the signal, an accurate reconstruction is possible. In this work, the sparsity degree, and thus the sampling rate, is determined empirically. Next, local and global saliency maps are generated from these random samples to capture small-scale and scene-wide saliency attributes, and the two maps are combined into an overall saliency map that includes both local and global attributes. The main contribution of this paper is the use of compressive sampling to create a novel wavelet-domain representation for image-saliency prediction.

Extensive performance evaluations show that the proposed method provides promising saliency-prediction performance while keeping the computational complexity reasonable, thanks to the dimensionality reduction of compressive sampling. In particular, the proposed method achieves favorable precision, recall, and F-measure on large-scale datasets compared with state-of-the-art saliency-detection methods. We hope the proposed approach brings new ideas to the saliency-analysis research community.
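A minimal sketch of the sampling step, assuming random selection of wavelet coefficients as the compressive measurement; the wavelet family, number of levels, and sampling rate are placeholders for values the paper derives from information diffusion and sparsity.

```python
import numpy as np
import pywt

def compressive_wavelet_samples(channel, wavelet="db4", levels=3, rate=0.1, seed=0):
    """Randomly sample a fraction of a channel's wavelet coefficients.

    `channel` could be one channel of a Lab image; all parameter choices here
    are illustrative assumptions, not the paper's settings.
    """
    coeffs = pywt.wavedec2(channel.astype(np.float32), wavelet, level=levels)
    flat, _ = pywt.coeffs_to_array(coeffs)       # all coefficients in one array
    x = flat.ravel()

    rng = np.random.default_rng(seed)
    m = max(1, int(rate * x.size))               # number of retained measurements
    idx = rng.choice(x.size, size=m, replace=False)
    return idx, x[idx]                           # sampled positions and values

# idx, y = compressive_wavelet_samples(lab_image[..., 0])
# Local and global saliency maps would then be built from these sparse samples.
```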

Issue Info:
  • Year: 2012
  • Volume: 8
  • Issue: 19
  • Pages: 129-154
Measures:
  • Citations: 0
  • Views: 1334
  • Downloads: 0
Abstract: 

Of the two processes of automatization and saliency (foregrounding), the Formalists regard the latter as the source of literary language. It is realized in two ways: deviation from the automatic rules that govern language, and the addition of rules to those that already govern it. Saliency thus appears through two methods, deviation from norms and addition to the rules. Although its purpose is never simply to depart from normative grammatical rules, some deviations lead to ungrammatical structures and cannot be considered artistic creativity. This essay examines some of the saliencies in Nezami's poetry (Khosrow and Shirin / Leili and Majnoon). The saliencies in Nezami's poetry are surveyed in four parts: 1. lexical norm deviation, 2. general syntactic rules, 3. dialectal norm deviation, and 4. accumulative rules.

Issue Info:
  • Year: 2024
  • Volume: 37
  • Issue: 11
  • Pages: 2367-2379
Measures:
  • Citations: 0
  • Views: 15
  • Downloads: 0
Abstract: 

In recent decades, the advancement of deep learning algorithms and their effectiveness in saliency detection have attracted significant research attention. Among these methods, the U-Net is widely used in computer vision and image processing. However, most previous deep-learning-based saliency-detection methods have focused on the accuracy of salient regions while often overlooking boundary quality, especially for fine boundaries. To address this gap, we developed a method that detects boundaries effectively. It comprises two modules based on the U-Net structure, prediction and residual refinement, where the refinement module improves the mask predicted by the prediction module. In addition, a channel-attention module is integrated to boost refinement of the saliency map; this module has a significant impact on the proposed method. The channel-attention module is implemented inside the refinement module and helps the network obtain a more accurate estimate by focusing on the crucial, informative regions of the image. The developed method is evaluated on five well-known saliency-detection datasets and consistently outperforms the baseline method on all five, demonstrating improved performance.
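A hedged sketch of a channel-attention block in the squeeze-and-excitation style, which is one plausible form of the module described above; the paper's exact block, and where it sits in the refinement module, may differ.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (an assumed form)."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global average per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # excitation: per-channel weights in (0, 1)
        )

    def forward(self, x):                            # x: (B, C, H, W)
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # reweight informative channels

# In a residual refinement module, the block could be applied to the features
# before predicting the residual that corrects the coarse mask, e.g.:
# refined = coarse_mask + refine_head(ChannelAttention(64)(features))
```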

Author(s): GAO K. | LIN SH. | ZHANG Y.

Issue Info:
  • Year: 2009
  • Volume: -
  • Issue: -
  • Pages: 322-329
Measures:
  • Citations: 1
  • Views: 171
  • Downloads: 0